
Conversation


@pull pull bot commented Jan 22, 2026

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.4)

Can you help keep this open source service alive? 💖 Please sponsor : )

bhavyaus and others added 30 commits November 24, 2025 13:24
This is a rethinking of the initial API proposed in `languageModelToolSupportsModel.d.ts`.

1. This switches to `models?: LanguageModelChatSelector[];` to control enablement.
   I'm definitely open to switching this out, but I think a synchronously-analyzable
   expression is important to retain the data flows in core without too many races.
2. The extension is able to define a tool at runtime via registerToolDefinition. This
   should let us have entirely service-driven tools from model providers without
   requiring a static definition for each one. We can also have model-specific
   variants of tools without a ton of package.json work for each variant of the tool
   (as initially proposed using `when` clauses). A rough sketch of the proposed shape
   follows below.
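
Roughly, the shape being proposed could look like the following. Only `models` and `registerToolDefinition` come from the description above; the surrounding names are illustrative stand-ins, not the actual proposed `.d.ts`:

```ts
/** Mirrors the shape of the stable vscode.LanguageModelChatSelector. */
interface LanguageModelChatSelector {
	vendor?: string;
	family?: string;
	version?: string;
	id?: string;
}

/** Hypothetical runtime tool definition (stand-in for the proposal). */
interface LanguageModelToolDefinition {
	name: string;
	description: string;
	inputSchema?: object;
	/**
	 * Synchronously-analyzable enablement: the tool is only available when
	 * the active chat model matches one of these selectors. Omitting it
	 * keeps the tool enabled for every model.
	 */
	models?: LanguageModelChatSelector[];
}

interface Disposable { dispose(): void; }

/**
 * Runtime counterpart to the static package.json contribution, so model
 * providers can ship service-driven, model-specific tools without a static
 * definition for each variant.
 */
declare function registerToolDefinition(
	definition: LanguageModelToolDefinition
): Disposable;

// Example: a tool variant that is only enabled for a hypothetical vendor/family.
const registration: Disposable = registerToolDefinition({
	name: 'myExt_searchWorkspace',
	description: 'Searches the workspace (variant tuned for one model family).',
	models: [{ vendor: 'example-vendor', family: 'example-family' }],
});
```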

This then propagates the definitions down into the tools service. Currently I have this
as just compiling to a `when` expression once it reaches the main thread. The tools
service then takes an IContextKeyService in cases where tools should be enumerated,
and the chat input sets the model keys in its scoped context key service. This allows
the tools to be filtered correctly in the tool picker.
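
To make that data flow concrete, here is a minimal self-contained sketch of the mechanic (not the actual core code: the `chat.model.*` key names are made up for illustration, and core would use `ContextKeyExpr`/`IContextKeyService` rather than these stand-ins):

```ts
// A "when" expression is modelled here as a predicate over context key/value pairs.
type ContextValues = Record<string, string | undefined>;
type WhenExpression = (ctx: ContextValues) => boolean;

interface LanguageModelChatSelector {
	vendor?: string;
	family?: string;
	id?: string;
}

// Compile once on the main thread: OR across selectors, AND within a selector.
// No per-tool async checks are needed afterwards, which is what keeps the
// enablement synchronously analyzable.
function compileModelSelectors(selectors?: LanguageModelChatSelector[]): WhenExpression {
	if (!selectors || selectors.length === 0) {
		return () => true; // no `models` entry means the tool is always enabled
	}
	return ctx => selectors.some(sel =>
		(sel.vendor === undefined || ctx['chat.model.vendor'] === sel.vendor) &&
		(sel.family === undefined || ctx['chat.model.family'] === sel.family) &&
		(sel.id === undefined || ctx['chat.model.id'] === sel.id)
	);
}

interface ToolEntry {
	id: string;
	when: WhenExpression;
}

// The chat input "sets the model keys in its scoped context key service";
// here that is simply the ContextValues object for the current widget.
function toolsForPicker(tools: ToolEntry[], scopedContext: ContextValues): ToolEntry[] {
	return tools.filter(tool => tool.when(scopedContext));
}

// Usage: only the matching variant shows up in the picker for this model.
const tools: ToolEntry[] = [
	{ id: 'searchVariantA', when: compileModelSelectors([{ vendor: 'example-vendor' }]) },
	{ id: 'genericTool', when: compileModelSelectors() },
];
console.log(toolsForPicker(tools, { 'chat.model.vendor': 'example-vendor' }).map(t => t.id));
// -> ['searchVariantA', 'genericTool']
```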

I initially thought about allowing multiple definitions to be registered for the same tool
name/id for model-specific variants of tools, but I realized that gets really gnarly and
we already have a `toolReferenceName` that multiple tools can register into.
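
A rough sketch of what sharing a reference name could mean in practice (the first-enabled-match rule here is an assumption; priority ordering between variants is still an open question below):

```ts
// Several concrete tool definitions register into one toolReferenceName;
// prompts/modes reference the name, and the variant whose model selector is
// currently enabled is the one that gets resolved.
type When = (modelId: string) => boolean;

interface RegisteredTool {
	id: string;                 // unique per definition
	toolReferenceName: string;  // shared across model-specific variants
	when: When;                 // compiled from `models`, as in the sketch above
}

// Assumed rule: first enabled match wins.
function resolveReference(
	registered: RegisteredTool[],
	name: string,
	modelId: string
): RegisteredTool | undefined {
	return registered.find(t => t.toolReferenceName === name && t.when(modelId));
}
```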

Todo for tomorrow morning:
- Tools don't make it to the ChatRequest yet, or something; still need to investigate
- Need to make sure tools in prompts/modes all work. For a first pass I think we can
  let prompts/modes reference all tools by toolReferenceName.
- Validate that multiple tools actually can safely share a reference name (and do
  some priority ordering?)
- General further validation
- Some unit tests
* chat: wip on model-specific tools

* key selected tools based on reference name
…sionsWelcome.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
osortega and others added 27 commits January 22, 2026 09:50
* Add usage data to chat result api

* Start prototyping status icon

* Cleanup old context widget

* Cleanup widget

* Grow outwards

* Some cleanups and refactors

* Make actions contributable

* Add action header

* Cleanup action rendering

* Add sub categories of token usage

* cleanup diff

* Handle copilot feedback
* Log usage of Add Element to Chat

* Small change
…d styles (#289701)

* fix: adjust position calculation for inline chat widget when not anchored above

* fix: improve position calculation for inline chat session overlay based on selection and diff changes

* fix: update inline chat session overlay to improve progress indication and remove pulse animation

* fix: rename progress elements to status for clarity and update related styles
* chore: fix casing

* chore: fix more casing
…ions (#289731)

* setting for attaching applying and referenced settings

* update

* tests

* fix tests

* Update src/vs/workbench/contrib/chat/common/promptSyntax/utils/promptFilesLocator.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix

* fix

* fix

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
…ew-setting

Add inlineChat.defaultModel setting
…grated Browser (#289565)

* Setting to open localhost links in Integrated Browser

* Apply suggestion from @Copilot

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Rename

* Log telemetry for open method

* Distinguish commands with and without URL

* Use isLocalhostAuthority in the other usage too

* Add descriptions for source values

* PR Feedback

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@pull pull bot locked and limited conversation to collaborators Jan 22, 2026
@pull pull bot added the ⤵️ pull label Jan 22, 2026
@pull pull bot merged commit 108524f into code:main Jan 22, 2026
